Do you share details of your personal life with ChatGPT or confide in it? According to Oxford Professor Mike Wooldridge, that may not be the best idea. Here's why.

Oxford Professor Urges Caution in Sharing Personal Information with Chatbots

2023 has been the year of generative AI chatbots and the large language models (LLMs) behind them. These models have grown at an extraordinary pace, and some AI experts worry that this could be dangerous for humanity, arguing that development should not move this fast, especially without oversight. Much of that concern has focused on artificial intelligence developing understanding and thinking of its own, but little has been said about how personal a chatbot can become.

Now Oxford University artificial intelligence professor Mike Wooldridge has warned users of AI chatbots to be careful about what they share. Simply put, he said, you shouldn't share personal and sensitive information with chatbots, such as your political views or how angry you are with your boss at work. Doing so can lead to unwelcome consequences and is "very unwise".

As The Guardian reported, this is partly because he believes these chatbots don't provide a balanced response; instead, the technology "tells you what you want to hear."

“Technology is basically designed to try to tell you what you want to hear — that’s literally all it does,” he said.

He also added that when you enter personal information into a chatbot like ChatGPT, that information is usually "fed directly into future versions of ChatGPT," meaning it can be used to train the chatbot's underlying generative AI model. And, of course, you can't take your data back once it has been fed in. It's worth noting, however, that OpenAI has added an option for users to turn off chat history in ChatGPT, which means those conversations won't be used to train its AI models.

In related news, Nithin Kamath, CEO and co-founder of Zerodha, also warned about the significant risk that AI and deepfakes pose to financial institutions. He noted that there are "checks to check for liveness and whether the other person is real or not", but that, given the rapid progress of AI and deepfakes, it is becoming increasingly difficult to confirm whether a given person is real or AI-generated.
